    Markovian stochastic approximation with expanding projections

    Stochastic approximation is a framework unifying many random iterative algorithms occurring in a diverse range of applications. The stability of the process is often difficult to verify in practical applications, and the process may even be unstable without additional stabilisation techniques. We study a stochastic approximation procedure with expanding projections similar to Andradóttir [Oper. Res. 43 (1995) 1037-1048]. We focus on Markovian noise and show stability and convergence under general conditions. Our framework also incorporates the possibility of using a random step size sequence, which allows us to consider settings with a non-smooth family of Markov kernels. We apply the theory to stochastic approximation expectation maximisation with particle independent Metropolis-Hastings sampling.
    Comment: Published in Bernoulli (http://isi.cbs.nl/bernoulli/) at http://dx.doi.org/10.3150/12-BEJ497 by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
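    To make the expanding-projections idea concrete, here is a minimal Python sketch, not the authors' procedure: it assumes a hypothetical noisy oracle noisy_field for a simple mean field with root at zero, a deterministic 1/n step size, and projection onto balls whose radius grows with the iteration index, so the iterate is stabilised without a fixed a priori bound.

```python
import numpy as np

def noisy_field(theta, rng):
    # Hypothetical noisy observation of the mean field h(theta) = -theta,
    # whose unique root is theta* = 0. The Gaussian noise stands in for
    # the Markovian perturbation treated in the paper.
    return -theta + rng.normal(scale=1.0, size=theta.shape)

def sa_expanding_projections(theta0, n_iter=10_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for n in range(1, n_iter + 1):
        gamma_n = 1.0 / n              # decreasing step size
        radius_n = 10.0 * np.sqrt(n)   # expanding projection set K_n = {|theta| <= radius_n}
        theta = theta + gamma_n * noisy_field(theta, rng)
        norm = np.linalg.norm(theta)
        if norm > radius_n:            # project back onto K_n if the iterate escapes
            theta = theta * (radius_n / norm)
    return theta

print(sa_expanding_projections([5.0]))  # ends up close to the root at 0
```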

    On the ergodicity properties of some adaptive MCMC algorithms

    In this paper we study the ergodicity properties of some adaptive Markov chain Monte Carlo (MCMC) algorithms that have recently been proposed in the literature. We prove that, under a set of verifiable conditions, ergodic averages calculated from the output of a so-called adaptive MCMC sampler converge to the required value and can even, under more stringent assumptions, satisfy a central limit theorem. We prove that the required conditions are satisfied for the independent Metropolis-Hastings algorithm and the random walk Metropolis algorithm with symmetric increments. Finally, we propose an application of these results to the case where the proposal distribution of the Metropolis-Hastings update is a mixture of distributions from a curved exponential family.
    Comment: Published at http://dx.doi.org/10.1214/105051606000000286 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
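    For orientation, the following sketch shows a generic adaptive random walk Metropolis sampler with symmetric Gaussian increments whose proposal scale is tuned on the fly. It is not the algorithm analysed in the paper, just a standard diminishing-adaptation rule of the kind such ergodicity results are designed to cover; the target, step-size exponent and acceptance target are illustrative choices.

```python
import numpy as np

def adaptive_rwm(log_target, x0, n_iter=50_000, target_accept=0.44, seed=0):
    # Random walk Metropolis with Gaussian (hence symmetric) increments whose
    # log proposal scale is tuned with a Robbins-Monro rule. The tuning steps
    # n**-0.7 vanish, i.e. the adaptation is diminishing.
    rng = np.random.default_rng(seed)
    x, log_sigma = float(x0), 0.0
    samples = np.empty(n_iter)
    for n in range(1, n_iter + 1):
        prop = x + np.exp(log_sigma) * rng.normal()
        accept_prob = min(1.0, np.exp(log_target(prop) - log_target(x)))
        if rng.random() < accept_prob:
            x = prop
        log_sigma += n ** -0.7 * (accept_prob - target_accept)
        samples[n - 1] = x
    return samples

# Toy usage: a standard normal target; ergodic averages should approach 0 and 1.
draws = adaptive_rwm(lambda x: -0.5 * x * x, x0=3.0)
print(draws.mean(), draws.std())
```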

    Quantitative convergence rates for sub-geometric Markov chains

    We provide explicit expressions for the constants involved in the characterisation of ergodicity of sub-geometric Markov chains. The constants are determined in terms of those appearing in the assumed drift and one-step minorisation conditions. The result is fundamental for the study of algorithms where uniform bounds on these constants are needed for a family of Markov kernels. Our result also accommodates some classes of inhomogeneous chains.
    Comment: 14 pages.
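    The drift and one-step minorisation conditions mentioned above are not reproduced in this listing; a standard sub-geometric form, in generic notation that is not necessarily the paper's, reads as follows.

```latex
\[
  PV(x) \;\le\; V(x) - \phi\bigl(V(x)\bigr) + b\,\mathbf{1}_C(x)
  \qquad \text{for all } x,
\]
\[
  P(x, A) \;\ge\; \varepsilon\,\nu(A)
  \qquad \text{for all } x \in C \text{ and measurable } A,
\]
```

    where $V \ge 1$ is a drift function, $\phi$ is concave and increasing with $\phi(v)/v \to 0$ (giving sub-geometric rather than geometric rates), $C$ is the small set, and $b < \infty$, $\varepsilon > 0$, $\nu$ are the drift constant, minorisation constant and minorising measure. Quantitative results of the kind announced express the convergence rate and its constants explicitly in terms of these ingredients.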

    A Region-Dependent Gain Condition for Asymptotic Stability

    A sufficient condition for the stability of a system resulting from the interconnection of dynamical systems is given by the small gain theorem. Roughly speaking, to apply this theorem it is required that the composition of the gains is continuous, increasing and upper bounded by the identity function. In this work, an alternative sufficient condition is presented for the case in which this criterion fails, either because of a lack of continuity or because the composed gain exceeds the identity function. More precisely, the local (resp. non-local) asymptotic stability of the origin (resp. global attractivity of a compact set) is ensured by a region-dependent small gain condition. Under an additional condition that implies convergence of solutions for almost all initial conditions in a suitable domain, almost global asymptotic stability of the origin is ensured. Two examples illustrate and motivate this approach.
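    For reference, the classical small gain condition that the abstract relaxes requires the composition of the two interconnection gains to lie strictly below the identity; in generic notation (not necessarily the paper's), with gains written as gamma_1 and gamma_2:

```latex
\[
  (\gamma_1 \circ \gamma_2)(s) \;<\; s \qquad \text{for all } s > 0,
\]
```

    with the composition additionally continuous and increasing. The region-dependent condition described above asks only for an analogous inequality to hold on suitable regions of the state space.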